
A Granular Framework for Construction Material Price Forecasting: Econometric and Machine-Learning Approaches

Lyu, Boge, Yin, Qianye, Tommelein, Iris Denise, Liu, Hanyang, Ranka, Karnamohit, Yeluripati, Karthik, Shi, Junzhe

arXiv.org Artificial Intelligence

This study develops a forecasting framework that leverages the Construction Specifications Institute (CSI) MasterFormat as the target data structure, enabling predictions at the six-digit section level and supporting detailed cost projections across a wide spectrum of building materials. To enhance predictive accuracy, the framework integrates explanatory variables such as raw material prices, commodity indexes, and macroeconomic indicators. Four time-series models, Long Short-Term Memory (LSTM), Autoregressive Integrated Moving Average (ARIMA), Vector Error Correction Model (VECM), and Chronos-Bolt, were evaluated under both baseline configurations (using CSI data only) and extended versions with explanatory variables. Results demonstrate that incorporating explanatory variables significantly improves predictive performance across all models. Among the tested approaches, the LSTM model consistently achieved the highest accuracy, with RMSE values as low as 1.390 and MAPE values of 0.957, representing improvements of up to 59% over the traditional statistical time-series model, ARIMA. Validation across multiple CSI divisions confirmed the framework's scalability, while Division 06 (Wood, Plastics, and Composites) is presented in detail as a demonstration case. This research offers a robust methodology that enables owners and contractors to improve budgeting practices and achieve more reliable cost estimation at the Definitive level.

INTRODUCTION

1.1 Motivation

The construction industry continues to demonstrate steady long-term growth, with global activity projected to reach US$9.8 trillion by 2026 [1]. Major upcoming programs in the United States, such as the Los Angeles 2028 Olympics and TSMC's fabrication facility in Arizona [2] [3], highlight the scale of high-value projects in the near future.

However, volatility in construction material prices has emerged as a critical challenge, creating significant uncertainty for contractors in project planning, budgeting, and cost management. Price fluctuations, driven by raw material costs, macroeconomic conditions such as inflation and interest rates, and supply-demand imbalances, have amplified the risks of cost overruns and delays [4] [5] [6] [7] [8]. Traditional econometric methods (i.e., multiple regression analysis) and modern econometric methods (i.e., univariate and multivariate time-series methods) have faced limitations in effectively capturing the high-frequency volatility observed in construction material prices [9]. These models often struggle to handle the complexity of input data and exhibit limited predictive accuracy in real-world applications.
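The study's accuracy claims rest on RMSE and MAPE. For readers unfamiliar with these metrics, a minimal sketch of how they are computed (our illustration with hypothetical price-index values, not the authors' code or CSI data):

```python
import math

def rmse(actual, predicted):
    # Root Mean Squared Error: penalizes large deviations quadratically.
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    # Mean Absolute Percentage Error, in percent; undefined when an actual value is 0.
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Illustrative monthly price-index series (hypothetical values)
actual = [100.0, 102.5, 101.0, 105.0]
predicted = [99.0, 103.0, 102.0, 104.0]
print(round(rmse(actual, predicted), 3))
print(round(mape(actual, predicted), 3))
```

Because MAPE is scale-free while RMSE is in index units, reporting both (as the study does) guards against a model that looks good on one scale but not the other.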


QuanvNeXt: An end-to-end quanvolutional neural network for EEG-based detection of major depressive disorder

Orka, Nabil Anan, Haque, Ehtashamul, Jannat, Maftahul, Awal, Md Abdul, Moni, Mohammad Ali

arXiv.org Artificial Intelligence

This study presents QuanvNeXt, an end-to-end fully quanvolutional model for EEG-based depression diagnosis. QuanvNeXt incorporates a novel Cross Residual block, which reduces feature homogeneity and strengthens cross-feature relationships while retaining parameter efficiency. We evaluated QuanvNeXt on two open-source datasets, where it achieved an average accuracy of 93.1% and an average AUC-ROC of 97.2%, outperforming state-of-the-art baselines such as InceptionTime (91.7% accuracy, 95.9% AUC-ROC). An uncertainty analysis across Gaussian noise levels demonstrated well-calibrated predictions, with ECE scores remaining low (0.0436, Dataset 1) to moderate (0.1159, Dataset 2) even at the highest perturbation (ε = 0.1). Additionally, a post-hoc explainable AI analysis confirmed that QuanvNeXt effectively identifies and learns spectrotemporal patterns that distinguish between healthy controls and major depressive disorder. Overall, QuanvNeXt establishes an efficient and reliable approach for EEG-based depression diagnosis.
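The uncertainty analysis above is summarized by Expected Calibration Error (ECE). As a point of reference, a standard binned ECE computation looks roughly like the following (our sketch of the common definition, not the authors' evaluation code):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    # ECE: average over confidence bins of |empirical accuracy - mean confidence|,
    # weighted by the fraction of samples falling in each bin.
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Toy example: two bins, each slightly miscalibrated
confs = [0.95, 0.95, 0.55, 0.55]
correct = [1, 1, 1, 0]
print(expected_calibration_error(confs, correct))
```

Lower is better; the 0.0436 reported for Dataset 1 indicates that predicted confidences track empirical accuracy closely even under noise.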


Quantum Autoencoder for Multivariate Time Series Anomaly Detection

Tscharke, Kilian, Wendlinger, Maximilian, Ahouzi, Afrae, Bhardwaj, Pallavi, Amoi-Taleghani, Kaweh, Schrödl-Baumann, Michael, Debus, Pascal

arXiv.org Artificial Intelligence

Anomaly Detection (AD) defines the task of identifying observations or events that deviate from typical - or normal - patterns, a critical capability in IT security for recognizing incidents such as system misconfigurations, malware infections, or cyberattacks. In enterprise environments like SAP HANA Cloud systems, this task often involves monitoring high-dimensional, multivariate time series (MTS) derived from telemetry and log data. One approach is the Quantum Autoencoder (QAE), an emerging and promising method with potential for application in both data compression and AD. However, prior applications of QAEs to time series AD have been restricted to univariate data, limiting their relevance for real-world enterprise systems. In this work, we introduce a novel QAE-based framework designed specifically for MTS AD at enterprise scale. We theoretically develop and experimentally validate the architecture, demonstrating that our QAE achieves performance competitive with neural-network-based autoencoders while requiring fewer trainable parameters. We evaluate our model on datasets that closely reflect SAP system telemetry and show that the proposed QAE is a viable and efficient alternative for semisupervised AD in real-world enterprise settings. Anomaly Detection (AD) refers to the process of identifying patterns or events that deviate from typical - or normal - behavior [1]. It plays a critical role in IT security and many other domains, as anomalies often correspond to potential security breaches, frauds, or system failures [2], [3]. Modern enterprise infrastructure, such as SAP HANA Cloud and other large-scale cloud-native applications, relies on continuous monitoring to ensure optimal performance, availability, and reliability. With increasing system complexity and scale, observability platforms generate large volumes of telemetry data, including structured multivariate time series (MTS) and unstructured log streams.
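Autoencoder-based AD, quantum or classical, shares one principle: compress normal data, then flag samples whose reconstruction error is large. A deliberately simplified classical analogue of that semisupervised pipeline (our illustration with a trivial mean-based "compressor"; the paper's QAE uses a parameterized quantum circuit instead):

```python
import statistics

def fit_profile(normal_windows):
    # "Train" a trivial compressor: store the per-dimension mean of normal data.
    dims = len(normal_windows[0])
    return [statistics.mean(w[d] for w in normal_windows) for d in range(dims)]

def reconstruction_error(profile, window):
    # Squared error between an observation and its (here: constant) reconstruction.
    return sum((x - m) ** 2 for x, m in zip(window, profile))

def detect(profile, windows, threshold):
    # Semisupervised rule: anything the model of "normal" reconstructs poorly is anomalous.
    return [reconstruction_error(profile, w) > threshold for w in windows]

normal = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]   # hypothetical 2-dimensional telemetry windows
profile = fit_profile(normal)
print(detect(profile, [[1.0, 2.0], [5.0, -3.0]], threshold=1.0))
```

The QAE's appeal is that the compression step can, in principle, use far fewer trainable parameters than a neural encoder of comparable expressiveness.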


QJoin: Transformation-aware Joinable Data Discovery Using Reinforcement Learning

Wang, Ning, Galhotra, Sainyam

arXiv.org Artificial Intelligence

Discovering which tables in large, heterogeneous repositories can be joined and by what transformations is a central challenge in data integration and data discovery. Traditional join discovery methods are largely designed for equi-joins, which assume that join keys match exactly or nearly so. These techniques, while efficient in clean, well-normalized databases, fail in open or federated settings where identifiers are inconsistently formatted, embedded, or split across multiple columns. Approximate or fuzzy joins alleviate minor string variations but cannot capture systematic transformations. We introduce QJoin, a reinforcement-learning framework that learns and reuses transformation strategies across join tasks. QJoin trains an agent under a uniqueness-aware reward that balances similarity with key distinctiveness, enabling it to explore concise, high-value transformation chains. To accelerate new joins, we introduce two reuse mechanisms: (i) agent transfer, which initializes new policies from pretrained agents, and (ii) transformation reuse, which caches successful operator sequences for similar column clusters. On the AutoJoin Web benchmark (31 table pairs), QJoin achieves an average F1-score of 91.0%. For 19,990 join tasks in NYC+Chicago open datasets, QJoin reduces runtime by up to 7.4% (13,747 s) through reuse. These results demonstrate that transformation learning and reuse can make join discovery both more accurate and more efficient.
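The key idea in the reward is that string similarity alone is not enough: a transformation that collapses many keys onto one value can score high on similarity while destroying joinability. One plausible form of such a uniqueness-aware reward (our illustration; the paper's exact formula and weighting may differ):

```python
from difflib import SequenceMatcher

def uniqueness_aware_reward(transformed_keys, target_keys, alpha=0.5):
    # Illustrative reward: blends average best-match string similarity with
    # key distinctiveness (fraction of unique values), to discourage
    # transformations that map many source keys to the same output.
    def best_sim(k):
        return max(SequenceMatcher(None, k, t).ratio() for t in target_keys)
    similarity = sum(best_sim(k) for k in transformed_keys) / len(transformed_keys)
    uniqueness = len(set(transformed_keys)) / len(transformed_keys)
    return alpha * similarity + (1 - alpha) * uniqueness

# A transformation that keeps keys distinct and matching beats one that collapses them.
print(uniqueness_aware_reward(["ab", "cd"], ["ab", "cd"]))
print(uniqueness_aware_reward(["x", "x"], ["ab", "cd"]))
```

Under such a reward, the RL agent is pushed toward transformation chains that are both accurate and join-key-preserving.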


Developing a Comprehensive Framework for Sentiment Analysis in Turkish

Aydin, Cem Rifki

arXiv.org Artificial Intelligence

In this thesis, we developed a comprehensive framework for sentiment analysis that takes its many aspects into account, mainly for Turkish. We have also proposed several approaches specific to sentiment analysis in English only. We have accordingly made five major and three minor contributions. We generated a novel and effective feature set by combining unsupervised, semi-supervised, and supervised metrics. We then fed them as input into classical machine learning methods, and outperformed neural network models for datasets of different genres in both Turkish and English. We created a polarity lexicon with a semi-supervised domain-specific method, which has been the first approach applied to corpora in Turkish. We performed a fine-grained morphological analysis for the sentiment classification task in Turkish by determining the polarities of morphemes. This can be adapted to other morphologically rich or agglutinative languages as well. We have built a novel neural network architecture, which combines recurrent and recursive neural network models for English. We built novel word embeddings that exploit sentiment, syntactic, semantic, and lexical characteristics for both Turkish and English. We also redefined context windows as subclauses in modelling word representations in English. This can also be applied to other linguistic fields and natural language processing tasks. We have achieved state-of-the-art and significant results for all these original approaches. Our minor contributions include methods related to aspect-based sentiment in Turkish, parameter redefinition in the semi-supervised approach, and aspect term extraction techniques for English. This thesis can be considered the most detailed and comprehensive study made on sentiment analysis in Turkish as of July 2020. Our work has also contributed to the opinion classification problem in English.
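Morpheme-level polarity matters in agglutinative languages because a single suffix can invert a word's sentiment. A toy sketch of the idea (hypothetical lexicon entries and a simplified negation rule; the thesis's actual morphological analysis is far richer):

```python
# Hypothetical morpheme-level polarity scoring for an agglutinative language:
# a word's polarity is composed from its morphemes, with a negation
# morpheme flipping the polarity accumulated so far (illustrative only).
MORPHEME_POLARITY = {"guzel": 1.0, "kotu": -1.0}   # hypothetical stems ("beautiful", "bad")
NEGATION_MORPHEMES = {"me", "ma"}                  # Turkish verbal negation suffixes

def word_polarity(morphemes):
    score = 0.0
    for m in morphemes:
        if m in NEGATION_MORPHEMES:
            score = -score          # negation flips the sign of the running score
        else:
            score += MORPHEME_POLARITY.get(m, 0.0)
    return score

print(word_polarity(["guzel"]))          # positive stem alone
print(word_polarity(["guzel", "me"]))    # negation suffix inverts the polarity
```

This is why a word-level lexicon alone underperforms in Turkish: the same stem can carry opposite sentiment depending on its suffixes.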


ICAD-LLM: One-for-All Anomaly Detection via In-Context Learning with Large Language Models

Wu, Zhongyuan, Wang, Jingyuan, Cheng, Zexuan, Zhou, Yilong, Wang, Weizhi, Pu, Juhua, Li, Chao, Ma, Changqing

arXiv.org Artificial Intelligence

Anomaly detection (AD) is a fundamental task of critical importance across numerous domains. Current systems increasingly operate in rapidly evolving environments that generate diverse yet interconnected data modalities -- such as time series, system logs, and tabular records -- as exemplified by modern IT systems. Effective AD methods in such environments must therefore possess two critical capabilities: (1) the ability to handle heterogeneous data formats within a unified framework, allowing the model to process and detect multiple modalities in a consistent manner during anomalous events; (2) a strong generalization ability to quickly adapt to new scenarios without extensive retraining. However, most existing methods fall short of these requirements, as they typically focus on single modalities and lack the flexibility to generalize across domains. To address this gap, we introduce a novel paradigm: In-Context Anomaly Detection (ICAD), where anomalies are defined by their dissimilarity to a relevant reference set of normal samples. Under this paradigm, we propose ICAD-LLM, a unified AD framework leveraging Large Language Models' in-context learning abilities to process heterogeneous data within a single model. Extensive experiments demonstrate that ICAD-LLM achieves competitive performance with task-specific AD methods and exhibits strong generalization to previously unseen tasks, which substantially reduces deployment costs and enables rapid adaptation to new environments. To the best of our knowledge, ICAD-LLM is the first model capable of handling anomaly detection tasks across diverse domains and modalities.
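The ICAD paradigm defines an anomaly by its dissimilarity to an in-context reference set of normal samples rather than by a trained decision boundary. A stripped-down numeric analogue of that scoring rule (our illustration; ICAD-LLM performs the comparison via an LLM's in-context reasoning over heterogeneous inputs, not an explicit distance metric):

```python
def icad_score(sample, reference_set):
    # In-context anomaly score: dissimilarity to the closest normal reference
    # sample, here measured as Euclidean distance for concreteness.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(sample, r) for r in reference_set)

reference = [[0.0, 0.0], [1.0, 1.0]]       # hypothetical "normal" reference samples
print(icad_score([0.1, 0.0], reference))    # near a reference: low score
print(icad_score([10.0, 10.0], reference))  # far from all references: high score
```

Because the definition of "normal" lives in the reference set supplied at inference time, adapting to a new environment means swapping the references, not retraining the model, which is the source of the deployment-cost advantage claimed above.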